Layer-wise Relevance Propagation (LRP)


By Prof. Hyunseok Oh
https://sddo.gist.ac.kr/
SDDO Lab at GIST

Table of Contents

References for this content

Why Do We Need XAI?

1.png

Making Deep Neural Nets Transparent

2.png

Layer-wise Relevance Propagation (LRP)

3.png

The basic assumptions behind LRP and how it works are as follows:

3_.png
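The central assumption underlying LRP is relevance conservation: the total relevance is preserved as it is redistributed from layer to layer, so at every layer it sums to the network's output score:

$$\sum_j R_j^{(l)} = \sum_k R_k^{(l+1)} = \cdots = f(\mathbf{x})$$

In other words, relevance is neither created nor destroyed during backpropagation through the layers; each neuron only passes on what it received, split among its inputs in proportion to their contributions.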

LRP with Tensorflow

Import Libraries

Load ImageNet Data

imagenet.png
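VGG16 expects its input in a specific format: BGR channel order with the per-channel ImageNet mean subtracted (this is what Keras's `vgg16.preprocess_input` does in its default "caffe" mode). A minimal NumPy-only sketch of that preprocessing step:

```python
import numpy as np

# Per-channel ImageNet means in BGR order (the standard VGG constants).
VGG_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess_vgg16(img_rgb):
    """Convert an RGB uint8 image of shape (H, W, 3) to VGG16's
    expected input: float32, BGR channel order, mean-subtracted."""
    x = img_rgb.astype(np.float32)[..., ::-1]  # flip RGB -> BGR
    return x - VGG_MEAN_BGR
```

Note that images must also be resized to 224x224 before being fed to the network; the resize step is omitted here.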

Load VGG16 Model

vgg.png
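The pretrained model can be obtained directly from `tf.keras.applications`. A minimal sketch (with `weights=None` to avoid the large pretrained-weight download; pass `weights='imagenet'` to get the actual ImageNet-trained model):

```python
import tensorflow as tf

# Build the VGG16 architecture with the classification head.
# weights='imagenet' would load the pretrained ImageNet weights.
model = tf.keras.applications.VGG16(weights=None, include_top=True)
model.summary()  # 16 weight layers: 13 convolutional + 3 fully connected
```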

Tensorflow LRP Implementation Details

Consider the following LRP propagation rule: $$ R_j^{(l)} = a_j \sum_k \frac{w^+_{jk}}{\sum_j a_j w^+_{jk} + b^+_k}\, R^{(l+1)}_k $$

This rule can be decomposed as a sequence of four elementary computations, all of which can also be expressed in vector form:

$\textit{Element-wise}$

$$\begin{align*} z_k & \leftarrow \epsilon + \sum_j a_j w_{jk}^{+} \\ s_k & \leftarrow R_k / z_k \\ c_j & \leftarrow \sum_k w_{jk}^{+} s_k \\ R_j & \leftarrow a_j c_j \end{align*}$$

$\textit{Vector Form}$ $$\begin{align*} \mathbf{z} & \leftarrow W_{+}^{\top} \cdot \mathbf{a} \\ \mathbf{s} & \leftarrow \mathbf{R} \oslash \mathbf{z} \\ \mathbf{c} & \leftarrow W_{+} \cdot \mathbf{s} \\ \mathbf{R} & \leftarrow \mathbf{a} \odot \mathbf{c} \end{align*}$$
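The four vector-form steps map directly onto matrix operations. A minimal NumPy sketch for a single dense layer (the function name `lrp_zplus_dense` is illustrative, not from the original implementation; biases are dropped for simplicity, which is a common choice):

```python
import numpy as np

def lrp_zplus_dense(a, W, R_next, eps=1e-9):
    """One LRP backward step through a dense layer using the z+ rule.

    a:      activations of layer l,       shape (J,)
    W:      weight matrix of the layer,   shape (J, K)
    R_next: relevance at layer l+1,       shape (K,)
    Returns the relevance at layer l,     shape (J,)
    """
    Wp = np.maximum(W, 0.0)   # keep only positive weights (w+)
    z = Wp.T @ a + eps        # z = W+^T a  (eps stabilizes the division)
    s = R_next / z            # s = R / z   (element-wise)
    c = Wp @ s                # c = W+ s
    return a * c              # R = a * c   (element-wise)
```

Because each term of `R_next` is split among the inputs in proportion to their positive contributions, the total relevance is (up to `eps`) conserved across the layer.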

Display Images

Result Examples

result.png

APPENDIX